"Google's neural networks have achieved the dream of CSI viewers everywhere: the company has revealed a new AI system capable of "enhancing" an eight-pixel square image, increasing the resolution 16-fold and effectively restoring lost data.
The neural network could be used to increase the resolution of blurred or pixelated faces, in a way previously thought impossible; a similar system was demonstrated for enhancing images of bedrooms, again creating a 32x32 pixel image from an 8x8 one."
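The resolution arithmetic here is easy to check in code. Below is a minimal sketch using plain nearest-neighbour upscaling, which is nothing like Google's model (the network predicts plausible new detail; naive upscaling only repeats pixels), but it makes the "16-fold" claim concrete: an 8x8 grid has 64 pixels, a 32x32 grid has 1,024, i.e. 16x as many (4x per side).

```python
def upscale_nearest(img, factor):
    """Upscale a 2-D grid of pixel values by repeating each pixel."""
    return [
        [img[r // factor][c // factor]
         for c in range(len(img[0]) * factor)]
        for r in range(len(img) * factor)
    ]

tiny = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]  # 8x8 "image"
big = upscale_nearest(tiny, 4)                                    # 32x32
print(len(big), len(big[0]))  # 32 32
```

The gap between this and the articles' systems is exactly the point: the 960 extra pixels here carry no new information, whereas the neural network fills them with learned, plausible content.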
"In Neural Photo Editing With Introspective Adversarial Networks, a group of University of Edinburgh engineers and a private research colleague describe a method for using "introspective adversarial networks" to edit images in realtime, which they demonstrate in an open project called "Neural Photo Editor" that "enhances" photos by predicting what should be under your brush."
"There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)-made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains-often seem to mirror not just human intelligence but also human inexplicability."
"Affectiva is running a program that pays drivers to help train its emotion-recognition system. The company sends drivers a kit including cameras and other sensors to place within their vehicles. These record a person's facial expressions, gestures, and tone of voice on the road. That data is then labeled by trained specialists for a range of emotions, and fed into deep neural networks."
""This is the first time such machine learning tools have been used in this context," says Fluri, "and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications.""
"The researchers, Tero Karras, Samuli Laine, and Timo Aila, came up with a new way of constructing a generative adversarial network, or GAN.
GANs employ two dueling neural networks to train a computer to learn the nature of a data set well enough to generate convincing fakes. When applied to images, this provides a way to generate often highly realistic fakery. The same Nvidia researchers have previously used the technique to create artificial celebrities (read our profile of the inventor of GANs, Ian Goodfellow)."
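The "two dueling neural networks" setup can be boiled down to a toy. The sketch below is an illustrative one-dimensional GAN, not Nvidia's method: the "generator" and "discriminator" are each just a pair of parameters, the real data is an invented Gaussian around 4.0, and the updates are hand-derived gradients of the standard GAN objectives.

```python
import math, random

random.seed(0)

def real_sample():
    # The "real" data distribution the generator must learn to imitate.
    return random.gauss(4.0, 0.5)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

a, b = 1.0, 0.0      # generator: g(z) = a*z + b
w, c = 0.1, 0.0      # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.02

for _ in range(3000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): move fakes toward what D calls "real".
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w
    a += lr * grad_x * z
    b += lr * grad_x

fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(500)]
mean_fake = sum(fakes) / len(fakes)
print(round(mean_fake, 2))  # typically drifts toward the real mean of 4.0
```

Real GANs replace both one-parameter-pair functions with deep networks and both scalars with images, but the alternating improve-the-forger / improve-the-detector loop is the same.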
"This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions."
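The second stage of that pipeline, example-based synthesis, has a simple core: for each frame's predicted features, find the closest-matching training example and reuse its sound. The sketch below fakes the RNN (the per-frame features and the "example bank" are invented stand-ins) and only shows that retrieval-and-concatenation step.

```python
def nearest_example(feat, bank):
    """Return the waveform snippet whose stored features best match `feat`."""
    return min(
        bank,
        key=lambda e: sum((a - b) ** 2 for a, b in zip(feat, e["features"])),
    )["wave"]

bank = [  # tiny example bank: sound features -> waveform snippet
    {"features": [0.9, 0.1], "wave": [0.5, -0.5, 0.5]},   # e.g. a hard tap
    {"features": [0.1, 0.9], "wave": [0.1, 0.2, 0.1]},    # e.g. a soft rustle
]

# Per-frame features that the (omitted) recurrent network would predict.
predicted = [[0.8, 0.2], [0.2, 0.8]]
waveform = [s for f in predicted for s in nearest_example(f, bank)]
print(waveform)  # [0.5, -0.5, 0.5, 0.1, 0.2, 0.1]
```

The learned part of the real system is predicting good features from video; once those exist, stitching a waveform out of matched examples is comparatively mechanical.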
"At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."
"showing how they can create a 40cm x 40cm "patch" that fools a convoluted neural network classifier that is otherwise a good tool for identifying humans into thinking that a person is not a person -- something that could be used to defeat AI-based security camera systems. They theorize that the could just print the patch on a t-shirt and get the same result."
"At Imperial College London, Murray Shanahan and colleagues are working on a way around this problem using an old, unfashionable technique called symbolic AI. "Basically this meant an engineer labelled everything for the AI," says Shanahan. His idea is to combine this with modern machine learning.
Symbolic AI never took off, because manually describing everything quickly proved overwhelming. Modern AI has overcome that problem by using neural networks, which learn their own representations of the world around them. "They decide what is salient," says Marta Garnelo, also at Imperial College."
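One way to read the hybrid Shanahan describes: a learned module turns raw input into symbols, and hand-written logic reasons over them. In this sketch the "neural" part is faked with a lookup table, and the scene, symbols, and rule are all invented for illustration.

```python
def perceive(scene):
    """Stand-in for a neural network: map raw input to symbolic facts."""
    detections = {"img1": [("cup", "on", "table"), ("table", "in", "kitchen")]}
    return detections[scene]

def infer(facts):
    """Symbolic step: if A is on B and B is in C, then A is in C too."""
    derived = set(facts)
    for (a, r1, b) in facts:
        for (b2, r2, c) in facts:
            if r1 == "on" and r2 == "in" and b == b2:
                derived.add((a, "in", c))
    return derived

facts = infer(perceive("img1"))
print(("cup", "in", "kitchen") in facts)  # True
```

The division of labor is the appeal: the network decides what is salient and emits symbols, while the symbolic layer contributes explicit, inspectable reasoning that pure end-to-end learning lacks.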
"Neural networks can find telltale patterns in a person's gait that can be used to recognize and identify them with almost perfect accuracy, according to new research published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The new system, called SfootBD, is nearly 380 times more accurate than previous methods, and it doesn't require a person to go barefoot in order to work. It's less invasive than other behavioral biometric verification systems, such as retinal scanners or fingerprinting, but its passive nature could make it a bigger privacy concern, since it could be used covertly."
"Researchers from MIT and Google recently showed off a machine learning algorithm capable of automatically retouching photos just like a professional photographer. Snap a photo and the neural network identifies exactly how to make it look better-increase contrast a smidge, tone down brightness, whatever-and apply the changes in less than 20 milliseconds."
"Whereas most machine learning algorithms can't hone their skills beyond an initial training period, the researchers say the new approach, called a liquid neural network, has a kind of built-in "neuroplasticity." That is, as it goes about its work-say, in the future, maybe driving a car or directing a robot-it can learn from experience and adjust its connections on the fly."
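The contrast being drawn (frozen weights after training versus weights that keep adapting in deployment) predates liquid networks. The sketch below is not MIT's architecture, which is a continuous-time model; it only illustrates "adjusting connections on the fly" with the classic least-mean-squares online update, on an invented streaming target y = 2x + 1.

```python
import random

random.seed(0)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    x = random.random()          # one incoming observation at a time
    y = 2 * x + 1                # ground truth the model discovers online
    err = (w * x + b) - y        # measure the error on this sample...
    w -= lr * err * x            # ...and adjust the connection immediately,
    b -= lr * err                # rather than freezing weights after training
print(round(w, 1), round(b, 1))  # converges toward 2.0 and 1.0
```

If the stream's underlying relationship changed mid-run, this learner would track the change, which is the behavior the "neuroplasticity" framing is pointing at.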
"Rather than being "taught" medical knowledge, the AI was asked to learn using unsupervised deep neural networks, known as autoencoders, without being given any medical knowledge. The researchers developed a method for translating the features found by the AI-only numbers initially-into high-resolution images that can be understood by humans."
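An autoencoder's objective really is that label-free: compress the input, decompress it, and minimize the reconstruction error. The sketch below is a minimal linear autoencoder with hand-derived gradients, on invented redundant data (y = 2x), nothing like the medical-imaging system; because the two inputs are redundant, a one-number code can rebuild both.

```python
data = [(0.1, 0.2), (0.5, 1.0), (0.9, 1.8)]  # redundant 2-D points (y = 2x)
e1 = e2 = d1 = d2 = 0.5                      # encoder / decoder weights
lr = 0.01

def mse():
    """Average reconstruction error: how badly decode(encode(x)) misses x."""
    total = 0.0
    for x, y in data:
        h = e1 * x + e2 * y        # encode: 2 numbers -> 1-number code
        rx, ry = d1 * h, d2 * h    # decode: 1-number code -> 2 numbers
        total += (x - rx) ** 2 + (y - ry) ** 2
    return total / len(data)

before = mse()
for _ in range(2000):
    ge1 = ge2 = gd1 = gd2 = 0.0
    for x, y in data:              # accumulate gradients of the squared error
        h = e1 * x + e2 * y
        rx, ry = d1 * h, d2 * h
        dh = -2 * (x - rx) * d1 - 2 * (y - ry) * d2
        gd1 += -2 * (x - rx) * h
        gd2 += -2 * (y - ry) * h
        ge1 += dh * x
        ge2 += dh * y
    n = len(data)
    e1 -= lr * ge1 / n; e2 -= lr * ge2 / n
    d1 -= lr * gd1 / n; d2 -= lr * gd2 / n

print(before > mse())  # True: training shrinks the reconstruction error
```

The learned code `h` is the "only numbers initially" the article mentions: the network discovers a compact internal description on its own, and a separate step is needed to render such features in a form humans can inspect.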